63 research outputs found

    On the Existence of Optimal Exact-Repair MDS Codes for Distributed Storage

    The high repair cost of (n,k) Maximum Distance Separable (MDS) erasure codes has recently motivated a new class of codes, called Regenerating Codes, that optimally trade off storage cost for repair bandwidth. In this paper, we address bandwidth-optimal (n,k,d) Exact-Repair MDS codes, which allow any failed node to be repaired exactly with access to any d surviving nodes, where k<=d<=n-1. We show the existence of Exact-Repair MDS codes that achieve minimum repair bandwidth (matching the cutset lower bound) for arbitrary admissible (n,k,d), i.e., k<n and k<=d<=n-1. Our approach is based on interference alignment techniques and uses vector linear codes, which allow symbols to be split into arbitrarily small subsymbols. Comment: 20 pages, 6 figures
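    For context, the cutset lower bound referenced in this abstract can be stated in the standard regenerating-code notation. The brief sketch below assumes the usual symbols (file size M, per-node storage \alpha, per-helper download \beta), none of which are defined in the abstract itself.

```latex
% Cutset lower bound at the minimum-storage (MSR) point of an (n,k,d) code:
% each node stores \alpha = M/k, and repairing one failed node by downloading
% \beta symbols from each of d helpers requires a total bandwidth of
\[
  \gamma \;=\; d\,\beta \;\ge\; \frac{d}{d-k+1}\cdot\frac{M}{k},
\]
% i.e., only a (d-k+1)-th of each helper's stored content, instead of the
% entire file M that naive MDS repair would download.
```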

    Exact Regeneration Codes for Distributed Storage Repair Using Interference Alignment

    The high repair cost of (n,k) Maximum Distance Separable (MDS) erasure codes has recently motivated a new class of codes, called Regenerating Codes, that optimally trade off storage cost for repair bandwidth. On one end of this spectrum of Regenerating Codes are Minimum Storage Regenerating (MSR) codes, which match the minimum storage cost of MDS codes while also significantly reducing repair bandwidth. In this paper, we describe Exact-MSR codes, which allow any failed node (whether systematic or parity) to be regenerated exactly rather than only functionally or information-equivalently. We show that Exact-MSR codes come with no loss of optimality with respect to random-network-coding based MSR codes (matching the cutset-based lower bound on repair bandwidth) for the cases of: (a) k/n <= 1/2; and (b) k <= 3. Our constructive approach is based on interference alignment techniques and, unlike the previous class of random-network-coding based approaches, provides explicit and deterministic coding schemes that require a finite-field size of at most 2(n-k). Comment: to be submitted to IEEE Transactions on Information Theory
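    As a rough numerical illustration of the bandwidth saving claimed above, the sketch below evaluates the cutset-bound MSR repair bandwidth against naive MDS repair (contacting k nodes and downloading the whole file). The function names and the unit file size are illustrative, not from the paper.

```python
from fractions import Fraction

def msr_repair_bandwidth(n, k, d, file_size=1):
    """Cutset-bound repair bandwidth at the MSR point: gamma = d*M / (k*(d-k+1))."""
    assert k < n and k <= d <= n - 1
    return Fraction(d * file_size, k * (d - k + 1))

def naive_mds_repair_bandwidth(file_size=1):
    """Naive MDS repair: download the whole file M and re-encode the lost node."""
    return Fraction(file_size)

if __name__ == "__main__":
    # A few k/n <= 1/2 configurations, matching the regime covered by the paper.
    for (n, k) in [(4, 2), (10, 5), (14, 7)]:
        d = n - 1  # repair from all surviving nodes
        msr = msr_repair_bandwidth(n, k, d)
        naive = naive_mds_repair_bandwidth()
        print(f"(n={n}, k={k}, d={d}): MSR {msr} vs naive {naive} "
              f"({float(msr / naive):.0%} of the file)")
```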

    Degrees of Freedom of Uplink-Downlink Multiantenna Cellular Networks

    An uplink-downlink two-cell cellular network is studied in which the first base station (BS) with $M_1$ antennas receives independent messages from its $N_1$ serving users, while the second BS with $M_2$ antennas transmits independent messages to its $N_2$ serving users. That is, the first and second cells operate as uplink and downlink, respectively. Each user is assumed to have a single antenna. Under this uplink-downlink setting, the sum degrees of freedom (DoF) is completely characterized as the minimum of $(N_1N_2+\min(M_1,N_1)(N_1-N_2)^+ +\min(M_2,N_2)(N_2-N_1)^+)/\max(N_1,N_2)$, $M_1+N_2$, $M_2+N_1$, $\max(M_1,M_2)$, and $\max(N_1,N_2)$, where $a^+$ denotes $\max(0,a)$. The result demonstrates that, for a broad class of network configurations, operating one of the two cells as uplink and the other as downlink can strictly improve the sum DoF compared to the conventional operation, in which both cells operate as either uplink or downlink. The DoF gain from such uplink-downlink operation is further shown to be achievable for heterogeneous cellular networks with hotspots and with delayed channel state information. Comment: 22 pages, 11 figures, in revision for IEEE Transactions on Information Theory
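    The characterization above can be evaluated directly. The sketch below simply computes the minimum of the five terms from the abstract for a sample antenna/user configuration; the function name and the example numbers are illustrative only.

```python
def sum_dof(M1, M2, N1, N2):
    """Sum DoF of the uplink-downlink two-cell network, per the abstract's
    characterization: the minimum of five terms, with a+ = max(0, a)."""
    plus = lambda a: max(0, a)
    term1 = (N1 * N2
             + min(M1, N1) * plus(N1 - N2)
             + min(M2, N2) * plus(N2 - N1)) / max(N1, N2)
    return min(term1, M1 + N2, M2 + N1, max(M1, M2), max(N1, N2))

# Example: two BSs with 4 antennas each, 3 uplink users and 2 downlink users.
print(sum_dof(M1=4, M2=4, N1=3, N2=2))  # -> 3.0
```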

    Convex Optimization for Machine Learning

    This book provides an introduction to convex optimization, a powerful and tractable class of optimization problems that can be solved efficiently on a computer. The goal of the book is to help readers develop a sense of what convex optimization is and how it can be used in a widening array of practical contexts, with a particular emphasis on machine learning. The first part of the book covers the core concepts of convex sets, convex functions, and related basic definitions that underpin convex optimization and its corresponding models. The second part deals with one very useful theory, called duality, which enables us to: (1) gain algorithmic insights; and (2) obtain approximate solutions to non-convex optimization problems that are often difficult to solve. The last part focuses on modern applications in machine learning and deep learning. A defining feature of this book is that it succinctly relates the “story” of how convex optimization plays a role, via historical examples and trending machine learning applications. Another key feature is that it includes programming implementations of a variety of machine learning algorithms inspired by optimization fundamentals, together with a brief tutorial on the programming tools used. The implementation is based on Python, CVXPY, and TensorFlow. The book does not follow a traditional textbook-style organization but is streamlined via a series of intimately related lecture notes, centered around coherent themes and concepts. It serves as a textbook mainly for a senior-level undergraduate course, yet is also suitable for a first-year graduate course. Readers benefit from having a good background in linear algebra, some exposure to probability, and basic familiarity with Python.
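    To give a flavor of the kind of formulation the book implements, a minimal CVXPY example is sketched below. It is a generic convex machine-learning problem (ridge-regularized least squares), not code taken from the book; the random data and the regularization weight are assumed.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))      # design matrix (assumed example data)
b = rng.standard_normal(50)            # observations
lam = 0.1                              # regularization weight (assumed)

x = cp.Variable(10)
objective = cp.Minimize(cp.sum_squares(A @ x - b) + lam * cp.sum_squares(x))
problem = cp.Problem(objective)
problem.solve()                        # hands the convex program to a solver

print("optimal value:", problem.value)
print("optimal x:", x.value)
```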

    Computation in Multicast Networks: Function Alignment and Converse Theorems

    The classical problem in network coding theory considers communication over multicast networks. Multiple transmitters send independent messages to multiple receivers, which decode the same set of messages. In this work, computation over multicast networks is considered: each receiver decodes an identical function of the original messages. For a countably infinite class of two-transmitter two-receiver single-hop linear deterministic networks, the computing capacity is characterized for a linear function (modulo-2 sum) of Bernoulli sources. Inspired by the geometric concept of interference alignment in networks, a new achievable coding scheme called function alignment is introduced. A new converse theorem is established that is tighter than cut-set based and genie-aided bounds. Computation (vs. communication) over multicast networks requires additional analysis to account for multiple receivers sharing a network's computational resources. We also develop a network decomposition theorem which identifies elementary parallel subnetworks that can constitute the original network without loss of optimality. The decomposition theorem provides a conceptually simpler algebraic proof of achievability that generalizes to L-transmitter L-receiver networks. Comment: to appear in the IEEE Transactions on Information Theory
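    The toy sketch below only illustrates the computation task the abstract describes (every receiver must output the modulo-2 sum of the Bernoulli sources); it is not the paper's function-alignment scheme, and the block length and source bias are arbitrary.

```python
import random

def bernoulli_block(p, length, rng):
    """A block of i.i.d. Bernoulli(p) bits."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

rng = random.Random(1)
a = bernoulli_block(0.5, 8, rng)   # source at transmitter 1
b = bernoulli_block(0.5, 8, rng)   # source at transmitter 2

# The target function: the component-wise modulo-2 sum a XOR b.
# In the computation setting, both receivers must output this same f,
# rather than decoding the individual messages a and b.
f = [x ^ y for x, y in zip(a, b)]

print("a =", a)
print("b =", b)
print("f = a (+) b =", f)
```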